Sparse Greedy Gaussian Process Regression
Authors
Peter Bartlett, RSISE, Australian National University, Canberra, ACT 0200, [email protected]

Abstract
We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian processes, with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n²m), storage is O(nm), the cost of prediction is O(n), and the cost of computing confidence bounds is O(nm), where n ≪ m. We show how to compute a stopping criterion, give bounds on the approximation error, and show applications to large-scale problems.
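As a rough illustration of the kind of scheme the abstract describes, the sketch below (mine, not the authors' code) maintains a growing index set of basis points and, at each step, scores a small random pool of candidates, keeping the one that most decreases the restricted negative log posterior. The kernel choice, function names, and parameter values are illustrative assumptions; the dense kernel matrix is formed here only for clarity, whereas the method the abstract describes computes kernel columns on demand so that storage stays O(nm).

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential kernel between row-vector sets A (m x d) and B (n x d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def sparse_greedy_gp(X, y, n_max, noise=0.1, n_candidates=59, seed=0):
    # Greedy forward selection of at most n_max basis points.
    # Restricted objective (negative log posterior up to constants):
    #   Q(a) = 1/(2 noise^2) ||y - K[:, P] a||^2 + 1/2 a^T K[P, P] a
    rng = np.random.default_rng(seed)
    m = len(X)
    K = rbf_kernel(X, X)  # dense only for clarity of this sketch
    selected = []

    def restricted_fit(P):
        KP = K[:, P]
        A = KP.T @ KP / noise**2 + K[np.ix_(P, P)]
        a = np.linalg.solve(A, KP.T @ y / noise**2)
        r = y - KP @ a
        q = 0.5 * (r @ r) / noise**2 + 0.5 * a @ K[np.ix_(P, P)] @ a
        return a, q

    best_q = 0.5 * (y @ y) / noise**2  # objective of the empty model
    while len(selected) < n_max:
        rest = [i for i in range(m) if i not in selected]
        pool = rng.choice(rest, size=min(n_candidates, len(rest)), replace=False)
        q, i = min((restricted_fit(selected + [int(j)])[1], int(j)) for j in pool)
        if q >= best_q:  # crude stop; the paper derives a principled criterion
            break
        best_q = q
        selected.append(i)

    alpha, _ = restricted_fit(selected)
    return lambda Xnew: rbf_kernel(Xnew, X[selected]) @ alpha, selected
```

Prediction with the returned closure then costs O(n) kernel evaluations per test point, in line with the O(n) prediction cost quoted in the abstract.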
Similar papers
Fast Greedy Insertion and Deletion in Sparse Gaussian Process Regression
In this paper, we introduce a new and straightforward criterion for successive insertion and deletion of training points in sparse Gaussian process regression. Our novel approach is based on an approximation of the selection technique proposed by Smola and Bartlett [1]. It is shown that the resulting selection strategies are as fast as the purely randomized schemes for insertion and deletion of...
A Gradient-Based Forward Greedy Algorithm for Sparse Gaussian Process Regression
In this paper, we present a gradient-based forward greedy method for sparse approximation of the Bayesian Gaussian Process Regression (GPR) model. Unlike previous work, which is mostly based on various basis-vector selection strategies, we propose to construct, rather than select, a new basis vector at each iterative step. This idea was motivated by the well-known gradient boosting approach...
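A minimal sketch of the "construct rather than select" idea, under my own assumptions (RBF basis functions, residual fitting in the gradient-boosting spirit): each step optimizes the center of one new basis function so that it best explains the current residual. The helper names and objective are hypothetical, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(X, z, lengthscale=1.0):
    # One RBF basis function k(., z) evaluated at the rows of X.
    return np.exp(-0.5 * ((X - z) ** 2).sum(-1) / lengthscale**2)

def construct_basis_vector(X, residual, z0):
    # Choose the center z of a new basis vector by gradient-based search:
    # for fixed z, the optimal weight beta reduces the residual norm by
    # (v . r)^2 / (v . v) with v = k(X, z); we maximize that gain over z.
    def neg_gain(z):
        v = rbf(X, z)
        return -(v @ residual) ** 2 / (v @ v + 1e-12)
    z = minimize(neg_gain, z0, method="L-BFGS-B").x
    v = rbf(X, z)
    beta = (v @ residual) / (v @ v)
    return z, beta  # append k(., z) with weight beta, then update the residual
```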
Fast Forward Selection to Speed Up Sparse Gaussian Process Regression
We present a method for the sparse greedy approximation of Bayesian Gaussian process regression, featuring a novel heuristic for very fast forward selection. Our method is essentially as fast as an equivalent one which selects the “support” patterns at random, yet it can outperform random selection on hard curve fitting tasks. More importantly, it leads to a sufficiently stable approximation of...
Greedy Block Coordinate Descent for Large Scale Gaussian Process Regression
We propose a variable decomposition algorithm, greedy block coordinate descent (GBCD), to make dense Gaussian process regression practical for large-scale problems. GBCD breaks a large-scale optimization into a series of small sub-problems. The challenge in variable decomposition algorithms is the identification of a sub-problem (the active set of variables) that yields the largest improvement...
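The following is a compact sketch of block coordinate descent applied to the dense GP linear system, with a simplified active-set rule of my own (largest gradient entries); the paper's greedy criterion instead identifies the sub-problem yielding the largest improvement.

```python
import numpy as np

def gbcd_solve(K, y, noise=0.1, block_size=50, max_iters=500, tol=1e-6):
    # Solve (K + noise^2 I) alpha = y, i.e. minimize
    #   f(alpha) = 1/2 alpha^T A alpha - y^T alpha,  with A = K + noise^2 I,
    # by repeatedly solving exact sub-problems on small coordinate blocks.
    A = K + noise**2 * np.eye(len(y))
    alpha = np.zeros(len(y))
    grad = -y  # gradient A @ alpha - y at alpha = 0
    for _ in range(max_iters):
        if np.linalg.norm(grad) < tol:
            break
        # Simplified greedy choice: coordinates with the largest gradients.
        block = np.argsort(-np.abs(grad))[:block_size]
        # Exact sub-problem: zero the gradient on the chosen block.
        delta = np.linalg.solve(A[np.ix_(block, block)], -grad[block])
        alpha[block] += delta
        grad += A[:, block] @ delta  # low-rank gradient update
    return alpha
```

Each iteration costs O(m·block_size) for the gradient update plus one small block_size³ solve, which is what makes the decomposition attractive when m is large.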
Publication date: 2000